Results 1 - 20 of 4,319
1.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654234

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study aims to investigate the emotion processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into depressive-mood and non-depressive-mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. The patients were shown facial images displaying positive, neutral, and negative emotions and were tasked with identifying the emotion category, while their visual evoked potentials were simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, or years of education, but showed significant differences in slow-wave sleep ratio (P = 0.039), ESS (P = 0.006), MMSE (P < 0.001), and MoCA scores (P = 0.043). No significant difference was found in accuracy or response time on emotional face recognition between the two groups. N170 latency in the depressive group was significantly longer than in the non-depressive group at the bilateral parieto-occipital lobe (P = 0.014 and 0.007), while no significant difference in N170 amplitude was found. There was no significant difference in P300 amplitude or latency between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MoCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit increased N170 latency and impaired facial emotion recognition ability. Special attention to depressive mood among OSAHS patients is warranted given its implications for patient care.
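The group comparisons reported above (e.g., N170 latency in the depressive vs. non-depressive group) rest on two-sample tests; a minimal sketch of Welch's t statistic is shown below. The latency values are purely illustrative, not data from the study.

```python
from math import sqrt

def welch_t(x, y):
    """Welch's t statistic for two independent samples with possibly
    unequal variances (e.g., N170 latency per group, in ms)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)  # sample variance, group x
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)  # sample variance, group y
    return (mx - my) / sqrt(vx / nx + vy / ny)

# Hypothetical N170 latencies (ms); a positive t means the first group is slower
depressive = [178.0, 185.0, 181.0, 190.0]
non_depressive = [170.0, 174.0, 168.0, 176.0]
t = welch_t(depressive, non_depressive)
```

In practice the t statistic would be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain the P values quoted in the abstract.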


Subjects
Depression, Emotions, Obstructive Sleep Apnea, Humans, Male, Middle Aged, Obstructive Sleep Apnea/physiopathology, Obstructive Sleep Apnea/psychology, Obstructive Sleep Apnea/complications, Depression/physiopathology, Depression/psychology, Depression/complications, Female, Adult, Emotions/physiology, Polysomnography, Evoked Potentials/physiology, Electroencephalography, Facial Recognition/physiology, Visual Evoked Potentials/physiology, Facial Expression
2.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience and expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in the same participants in a similar paradigm featuring the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, which were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
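The multivariate pattern analysis (decoding) described above classifies the condition (expected vs. unexpected) from the spatial pattern of EEG activity at each time point. The study's actual classifier and pipeline are not specified here; this is an illustrative stand-in using a toy nearest-centroid decoder on made-up two-sensor patterns.

```python
def nearest_centroid_decode(X_train, y_train, X_test, y_test):
    """Classify each test trial by the nearer class-mean pattern (labels 0/1);
    return the proportion of correct classifications."""
    def mean_pattern(label):
        rows = [x for x, y in zip(X_train, y_train) if y == label]
        return [sum(col) / len(rows) for col in zip(*rows)]
    c0, c1 = mean_pattern(0), mean_pattern(1)
    def dist(a, b):  # squared Euclidean distance between patterns
        return sum((u - v) ** 2 for u, v in zip(a, b))
    correct = sum(
        (dist(x, c1) < dist(x, c0)) == bool(y)  # predicted label vs. true label
        for x, y in zip(X_test, y_test)
    )
    return correct / len(X_test)

# Hypothetical trials: rows are [sensor1, sensor2] patterns, labels 0/1
X_train = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]]
y_train = [0, 0, 1, 1]
acc = nearest_centroid_decode(X_train, y_train, [[0.0, 0.1], [0.9, 1.0]], [0, 1])
```

Repeating this at every post-stimulus time point yields a decoding timecourse, from which the peak-decoding latency discussed above can be read off.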


Subjects
Electroencephalography, Facial Recognition, Humans, Male, Female, Adult, Facial Recognition/physiology, Young Adult, Photic Stimulation, Reaction Time/physiology, Visual Perception/physiology, Face/physiology
3.
Sci Rep ; 14(1): 9418, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658628

ABSTRACT

Pupil contagion refers to changes in an observer's pupil diameter in response to changes in the pupil diameter of others. Recent studies on the other-race effect on pupil contagion have mainly used eye-region images as stimuli, revealing the effect in adults but not in infants. To address this research gap, the current study used whole-face images as stimuli to assess the pupil-diameter response of 5-6-month-old and 7-8-month-old infants to changes in the pupil diameter of both upright and inverted unfamiliar-race faces. The study initially hypothesized that there would be no pupil contagion for either upright or inverted unfamiliar-race faces, based on our previous finding of pupil contagion occurring only for familiar-race faces among 5-6-month-old infants. Notably, the current results indicated that 5-6-month-old infants exhibited pupil contagion for both upright and inverted unfamiliar-race faces, while 7-8-month-old infants showed this effect only for upright unfamiliar-race faces. These results demonstrate that the face inversion effect of pupil contagion does not occur in 5-6-month-old infants, thereby suggesting the presence of the other-race effect in pupil contagion among this age group. Overall, this study provides the first evidence of the other-race effect on infants' pupil contagion using face stimuli.
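Pupil-contagion analyses typically express the observer's pupil diameter as change from a pre-stimulus baseline. A minimal sketch is below; the window lengths and millimetre values are illustrative assumptions, not parameters from the study.

```python
def baseline_corrected(pupil_trace, baseline_samples):
    """Subtract the mean of the pre-stimulus baseline from every sample,
    yielding pupil-diameter change relative to baseline."""
    base = sum(pupil_trace[:baseline_samples]) / baseline_samples
    return [p - base for p in pupil_trace]

def mean_dilation(pupil_trace, baseline_samples):
    """Average post-baseline change; positive values indicate dilation
    (the observer's pupil enlarging, as in contagion to a dilating face)."""
    corrected = baseline_corrected(pupil_trace, baseline_samples)
    post = corrected[baseline_samples:]
    return sum(post) / len(post)

# Hypothetical trace in mm: 2 baseline samples, then 2 post-stimulus samples
trace = [3.0, 3.0, 3.2, 3.4]
dilation = mean_dilation(trace, baseline_samples=2)
```

Comparing this dilation measure across stimulus conditions (upright vs. inverted, familiar- vs. unfamiliar-race) is the kind of contrast the study reports.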


Subjects
Pupil, Humans, Pupil/physiology, Infant, Male, Female, Photic Stimulation, Facial Recognition/physiology
4.
Cogn Res Princ Implic ; 9(1): 25, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652383

ABSTRACT

The use of face coverings can make communication more difficult by removing access to visual cues as well as affecting the physical transmission of speech sounds. This study aimed to assess the independent and combined contributions of visual and auditory cues to impaired communication when using face coverings. In an online task, 150 participants rated videos of natural conversation along three dimensions: (1) how much they could follow, (2) how much effort was required, and (3) the clarity of the speech. Visual and audio variables were independently manipulated in each video, so that the same video could be presented with or without a superimposed surgical-style mask, accompanied by one of four audio conditions (unfiltered audio, or audio filtered to simulate the attenuation associated with a surgical mask, an FFP3 mask, or a visor). Hypotheses and analyses were pre-registered. Both the audio and visual variables had a statistically significant negative impact across all three dimensions. Whether or not talkers' faces were visible made the largest contribution to participants' ratings. The study identifies a degree of attenuation whose negative effects can be overcome by the restoration of visual cues. The significant effects observed in this nominally low-demand task (speech in quiet) highlight the importance of visual and audio cues in everyday life and suggest that both should be considered in future face-mask designs.
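The audio manipulation described above (filtering to simulate mask attenuation) can be approximated, at its very simplest, by a flat gain reduction in decibels. The per-condition dB values below are hypothetical placeholders, not the study's measured attenuations, and a faithful simulation would use frequency-dependent filtering.

```python
def attenuate(samples, db):
    """Apply a flat attenuation of `db` decibels to an audio sample sequence
    (amplitude gain = 10^(-db/20))."""
    gain = 10 ** (-db / 20)
    return [s * gain for s in samples]

# Hypothetical attenuation levels per condition (illustrative only)
conditions = {"unfiltered": 0.0, "surgical": 4.0, "ffp3": 7.0, "visor": 10.0}
speech = [0.2, -0.5, 0.8]  # toy waveform samples
simulated = {name: attenuate(speech, db) for name, db in conditions.items()}
```

A 20 dB attenuation corresponds to a tenfold reduction in amplitude, which is why the gain formula divides by 20 rather than 10 (the latter applies to power).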


Subjects
Cues (Psychology), Speech Perception, Humans, Adult, Female, Male, Young Adult, Speech Perception/physiology, Visual Perception/physiology, Masks, Adolescent, Speech/physiology, Communication, Middle Aged, Facial Recognition/physiology
5.
Article in English | MEDLINE | ID: mdl-38607744

ABSTRACT

The purpose of this work is to analyze how new technologies can enhance clinical practice while also examining the physical traits of emotional expressiveness of facial expressions in a number of psychiatric illnesses. Hence, this work proposes an automatic facial expression recognition system that analyzes static, sequential, or video facial images from medical healthcare data to detect emotions in people's facial regions. The proposed method is implemented in five steps. The first step is image preprocessing, where a facial region of interest is segmented from the input image. The second component includes a classical deep feature representation and a quantum part that involves successive sets of quantum convolutional layers followed by random quantum variational circuits for feature learning. Here, the proposed system attains a faster training approach using the proposed quantum convolutional neural network, which takes [Formula: see text] time, whereas classical convolutional neural network models take [Formula: see text] time. Additionally, performance-improvement techniques such as image augmentation, fine-tuning, matrix normalization, and transfer learning are applied to the recognition system. Finally, the scores of the classical and quantum deep learning models are fused to improve the performance of the proposed method. Extensive experiments on the Karolinska Directed Emotional Faces (KDEF), Static Facial Expressions in the Wild (SFEW 2.0), and Facial Expression Recognition 2013 (FER-2013) benchmark databases, together with comparisons against other state-of-the-art methods, demonstrate the improvement achieved by the proposed system.
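The final fusion step, combining scores from the classical and quantum models, can be sketched as a weighted late fusion of class-probability vectors. The paper's exact fusion rule is not specified here; equal weighting is an assumption of this sketch.

```python
def fuse_scores(classical_probs, quantum_probs, w=0.5):
    """Weighted late fusion of two models' class-probability vectors,
    renormalized to sum to 1."""
    fused = [w * c + (1 - w) * q for c, q in zip(classical_probs, quantum_probs)]
    total = sum(fused)
    return [f / total for f in fused]

def predict(classical_probs, quantum_probs, w=0.5):
    """Index of the highest fused score, i.e., the predicted emotion class."""
    fused = fuse_scores(classical_probs, quantum_probs, w)
    return max(range(len(fused)), key=fused.__getitem__)

# Toy two-class example: the classical model favors class 0, the quantum model class 1
label = predict([0.7, 0.3], [0.4, 0.6])
```

Late fusion of this kind lets each branch be trained independently, with only the output distributions combined at decision time.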


Subjects
Facial Recognition, Mental Health, Humans, Benchmarking, Factual Databases, Computer Neural Networks
6.
Sensors (Basel) ; 24(7), 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators allow the research hypothesis to be validated, such as the correlation between jurors' emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different statuses reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. On the other hand, regression analysis shows that the correlation between jurors' valence and their choices in the jury test increases when only data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
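The key analytic move above, correlating jurors' FER-derived valence with their jury-test choices only on attentive trials, amounts to a filter-then-correlate step. The threshold and data below are illustrative; the study's attention metric and cutoffs are not specified here.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def attentive_correlation(valence, choices, attention, threshold):
    """Correlate FER valence with jury choices, keeping only trials where the
    juror's attention level met the threshold."""
    kept = [(v, c) for v, c, a in zip(valence, choices, attention) if a >= threshold]
    vs, cs = zip(*kept)
    return pearson(list(vs), list(cs))
```

Filtering out inattentive trials removes responses that do not reflect the stimulus, which is exactly why the reported correlation rises on the attentive subset.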


Subjects
Facial Recognition, Humans, Reproducibility of Results, Acoustics, Sound, Emotions
7.
Sci Rep ; 14(1): 8121, 2024 04 07.
Article in English | MEDLINE | ID: mdl-38582772

ABSTRACT

This paper proposes an improved strategy for the MobileNetV2 neural network (I-MobileNetV2) in response to the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making information less likely to be lost. The SELU activation function replaces ReLU6 to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, a channel attention mechanism (Squeeze-and-Excitation Networks, SE-Net) is integrated into the MobileNetV2 network. Experiments conducted on the facial expression datasets FER2013 and CK+ showed that the proposed model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
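The Squeeze-and-Excitation channel attention integrated here squeezes each channel to a scalar by global average pooling, passes the resulting vector through a small two-layer bottleneck (ReLU then sigmoid), and rescales the channels by the outputs. A dependency-free sketch on nested lists follows; the weights are illustrative, and a real implementation would use a deep learning framework.

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation on a [C][H][W] nested-list tensor.
    w1: [hidden][C] weights, w2: [C][hidden] weights (biases omitted)."""
    # Squeeze: per-channel global average pooling -> vector z of length C
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # Excitation: h = ReLU(W1 z), s = sigmoid(W2 h)
    h = [max(0.0, sum(wij * zj for wij, zj in zip(wi, z))) for wi in w1]
    s = [1 / (1 + math.exp(-sum(wij * hj for wij, hj in zip(wi, h)))) for wi in w2]
    # Scale: reweight each channel by its attention score s[c]
    return [[[v * s[c] for v in row] for row in ch] for c, ch in enumerate(feature_maps)]
```

The bottleneck (hidden size smaller than C in practice) keeps the attention branch cheap, which is consistent with the paper's emphasis on staying lightweight.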


Subjects
Accidental Injuries, Facial Recognition, Humans, Computer Neural Networks, Recognition (Psychology)
8.
Neuroimage Clin ; 41: 103586, 2024.
Article in English | MEDLINE | ID: mdl-38428325

ABSTRACT

BACKGROUND: Emotion processing deficits are known to accompany depressive symptoms and are often seen in stroke patients. Little is known about the influence of post-stroke depressive (PSD) symptoms and specific brain lesions on altered emotion processing abilities, or how these phenomena develop over time. This potential relationship may impact post-stroke rehabilitation of neurological and psychosocial function. To address this scientific gap, we investigated the relationship between PSD symptoms and emotion processing abilities in a longitudinal study design from the first days post-stroke into the early chronic phase. METHODS: Twenty-six ischemic stroke patients performed an emotion processing task on videos with emotional faces ('happy,' 'sad,' 'anger,' 'fear,' and 'neutral') at different intensity levels (20%, 40%, 60%, 80%, 100%). Recognition accuracies and response times were measured, as well as scores of depressive symptoms (Montgomery-Åsberg Depression Rating Scale). Twenty-eight healthy participants matched in age and sex were included as a control group. Whole-brain support-vector regression lesion-symptom mapping (SVR-LSM) analyses were performed to investigate whether specific lesion locations were associated with the recognition accuracy of specific emotion categories. RESULTS: Stroke patients performed worse in overall recognition accuracy compared to controls, specifically in the recognition of happy, sad, and fearful faces. Notably, more depressed stroke patients showed increased processing of specific negative emotions: they responded significantly faster to angry faces and recognized low-intensity sad faces significantly more accurately. These effects, observed in the first days after stroke, partly persisted to the follow-up assessment several months later. SVR-LSM analyses revealed that inferior and middle frontal regions (IFG/MFG), as well as the insula and putamen, were associated with emotion-recognition deficits in stroke. Specifically, recognizing happy facial expressions was influenced by lesions affecting the anterior insula, putamen, IFG, MFG, orbitofrontal cortex, and rolandic operculum. Lesions in the posterior insula, rolandic operculum, and MFG were also related to reduced recognition accuracy of fearful facial expressions, whereas recognition deficits for sad faces were associated with frontal pole, IFG, and MFG damage. CONCLUSION: PSD symptoms facilitate the processing of negative emotional stimuli, specifically angry and sad facial expressions. The recognition accuracy of different emotional categories was linked to brain lesions in emotion-related processing circuits, including the insula, basal ganglia, IFG, and MFG. In summary, our study provides support for psychosocial and neural factors underlying emotional processing after stroke, contributing to the pathophysiology of PSD.
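Recognition accuracy at each morph intensity (20-100%), the behavioural measure underlying the results above, reduces to grouping trials by intensity and averaging correctness. A minimal sketch with made-up trials:

```python
from collections import defaultdict

def accuracy_by_intensity(trials):
    """trials: iterable of (intensity_percent, correct) pairs.
    Returns {intensity: proportion correct}, keyed in ascending intensity."""
    buckets = defaultdict(list)
    for intensity, correct in trials:
        buckets[intensity].append(1 if correct else 0)
    return {i: sum(v) / len(v) for i, v in sorted(buckets.items())}

# Hypothetical trials for one emotion category
trials = [(20, False), (20, True), (60, True), (100, True), (100, True)]
curve = accuracy_by_intensity(trials)
```

Comparing these per-intensity curves between patients and controls (or between more and less depressed patients) is how intensity-specific effects such as the low-intensity sad-face advantage are detected.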


Subjects
Depression, Facial Recognition, Humans, Longitudinal Studies, Emotions/physiology, Anger, Brain/diagnostic imaging, Facial Expression, Facial Recognition/physiology
9.
J Neurosci ; 44(17), 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38438256

ABSTRACT

Recognizing faces regardless of their viewpoint is critical for social interactions. Traditional theories hold that view-selective early visual representations gradually become tolerant to viewpoint changes along the ventral visual hierarchy. Newer theories, based on single-neuron monkey electrophysiological recordings, suggest a three-stage architecture including an intermediate face-selective patch abruptly achieving invariance to mirror-symmetric face views. Human studies combining neuroimaging and multivariate pattern analysis (MVPA) have provided convergent evidence of view selectivity in early visual areas. However, contradictory conclusions have been reached concerning the existence in humans of a mirror-symmetric representation like that observed in macaques. We believe these contradictions arise from low-level stimulus confounds and data analysis choices. To probe for low-level confounds, we analyzed images from two face databases. Analyses of image luminance and contrast revealed biases across face views described by even polynomials, i.e., mirror-symmetric ones. To explain major trends across neuroimaging studies, we constructed a network model incorporating three constraints: cortical magnification, convergent feedforward projections, and interhemispheric connections. Given the identified low-level biases, we show that a gradual increase of interhemispheric connections across network layers is sufficient to replicate view-tuning in early processing stages and mirror-symmetry in later stages. Data analysis decisions (pattern dissimilarity measure and data recentering) accounted for the inconsistent observation of mirror-symmetry across prior studies. Pattern analyses of human fMRI data (of either sex) revealed biases compatible with our model. The model provides a unifying explanation of MVPA studies of viewpoint selectivity and suggests observations of mirror-symmetry originate from ineffectively normalized signal imbalances across different face views.
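The claim that luminance and contrast biases across face views are "described by even polynomials, i.e., mirror-symmetric" can be checked by decomposing any profile sampled at views symmetric about the frontal view into even (mirror-symmetric) and odd components. The values below are illustrative, not measurements from the face databases.

```python
def even_odd_decompose(values):
    """Decompose samples taken at angles symmetric about 0 (-θ ... +θ) into
    a mirror-symmetric (even) part and an antisymmetric (odd) part, so that
    even[i] + odd[i] reconstructs the original sample."""
    n = len(values)
    even = [(values[i] + values[n - 1 - i]) / 2 for i in range(n)]
    odd = [(values[i] - values[n - 1 - i]) / 2 for i in range(n)]
    return even, odd

# Hypothetical mean luminance at views -60, -30, 0, +30, +60 degrees
luminance = [0.42, 0.48, 0.55, 0.47, 0.43]
even, odd = even_odd_decompose(luminance)
```

A profile dominated by its even part (odd component near zero) is exactly the kind of mirror-symmetric low-level bias the authors argue can masquerade as mirror-symmetric neural tuning.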


Subjects
Facial Recognition, Humans, Male, Female, Facial Recognition/physiology, Adult, Neuroimaging/methods, Photic Stimulation/methods, Neurological Models, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Magnetic Resonance Imaging/methods, Young Adult
10.
Sci Rep ; 14(1): 5459, 2024 03 05.
Article in English | MEDLINE | ID: mdl-38443378

ABSTRACT

Roboticists often imbue robots with human-like physical features to increase the likelihood that they are afforded benefits known to be associated with anthropomorphism. Similarly, deepfakes often employ computer-generated human faces to attempt to create convincing simulacra of actual humans. In the present work, we investigate whether perceivers' higher-order beliefs about faces (i.e., whether they represent actual people or android robots) modulate the extent to which perceivers deploy face-typical processing for social stimuli. Past work has shown that perceivers' recognition performance is more impacted by the inversion of faces than objects, thus highlighting that faces are processed holistically (i.e., as Gestalt), whereas objects engage feature-based processing. Here, we use an inversion task to examine whether face-typical processing is attenuated when actual human faces are labeled as non-human (i.e., android robot). This allows us to employ a task shown to be differentially sensitive to social (i.e., faces) and non-social (i.e., objects) stimuli while also randomly assigning face stimuli to seem real or fake. The results show smaller inversion effects when face stimuli were believed to represent android robots compared to when they were believed to represent humans. This suggests that robots strongly resembling humans may still fail to be perceived as "social" due to pre-existing beliefs about their mechanistic nature. Theoretical and practical implications of this research are discussed.
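The inversion-effect measure above is simply the accuracy cost of turning stimuli upside down, computed separately for faces believed to be human vs. android. A sketch with hypothetical trial data:

```python
def inversion_effect(trials):
    """trials: list of (orientation, correct) with orientation 'upright' or
    'inverted'. Returns upright accuracy minus inverted accuracy; a larger
    value is taken to index more holistic (face-typical) processing."""
    def acc(orientation):
        hits = [1 if c else 0 for o, c in trials if o == orientation]
        return sum(hits) / len(hits)
    return acc("upright") - acc("inverted")

# Hypothetical trials per belief condition (illustrative only)
believed_human = [("upright", True), ("upright", True),
                  ("inverted", True), ("inverted", False)]
believed_android = [("upright", True), ("upright", True),
                    ("inverted", True), ("inverted", True)]
effect_human = inversion_effect(believed_human)
effect_android = inversion_effect(believed_android)
```

The study's key contrast is the difference between these two effects: a smaller inversion effect in the android-belief condition indicates attenuated holistic processing.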


Subjects
Facial Recognition, Robotics, Humans, Social Perception, Chromosome Inversion, Physical Examination
11.
Pediatr Blood Cancer ; 71(6): e30943, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38470289

ABSTRACT

BACKGROUND/OBJECTIVES: Survivors of pediatric brain tumors (SPBT) experience significant social challenges, including fewer friends and greater isolation than peers. Difficulties in face processing and visual social attention have been implicated in these outcomes. This study evaluated facial expression recognition (FER), social attention, and their associations with social impairments in SPBT. METHODS: SPBT (N = 54; ages 7-16) at least 2 years post treatment completed a measure of FER, while parents completed measures of social impairment. A subset (N = 30) completed a social attention assessment that recorded eye gaze patterns while watching videos depicting pairs of children engaged in joint play. Social Prioritization scores were calculated, with higher scores indicating more face looking. Correlations and regression analyses evaluated associations between variables, while a path analysis modeling tool (PROCESS) evaluated the indirect effects of Social Prioritization on social impairments through emotion-specific FER. RESULTS: Poorer recognition of angry and sad facial expressions was significantly correlated with greater social impairment. Social Prioritization was positively correlated with angry FER but no other emotions. Social Prioritization had significant indirect effects on social impairments through angry FER. CONCLUSION: Findings suggest interventions aimed at improving recognition of specific emotions may mitigate social impairments in SPBT. Further, reduced social attention (i.e., diminished face looking) could be a factor in reduced face processing ability, which may result in social impairments. Longitudinal research is needed to elucidate temporal associations between social attention, face processing, and social impairments.
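The Social Prioritization score above summarizes eye-gaze data as the proportion of samples landing on faces. A toy sketch using rectangular face areas of interest follows; the AOI coordinates and gaze samples are hypothetical, and real eye-tracking pipelines use calibrated, time-stamped data.

```python
def social_prioritization(gaze_samples, face_regions):
    """Proportion of gaze samples falling inside any face AOI.
    gaze_samples: list of (x, y); face_regions: list of (x0, y0, x1, y1)
    rectangles. Higher scores indicate more face looking."""
    def in_face(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in face_regions)
    hits = sum(in_face(x, y) for x, y in gaze_samples)
    return hits / len(gaze_samples)

# Hypothetical AOIs for two children's faces in a joint-play video frame
faces = [(100, 50, 180, 130), (420, 60, 500, 140)]
gaze = [(150, 90), (460, 100), (300, 300)]
score = social_prioritization(gaze, faces)
```

Correlating such scores with emotion-specific FER accuracy, as in the PROCESS analysis above, tests whether diminished face looking relates to social impairment indirectly through recognition ability.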


Subjects
Attention, Brain Neoplasms, Cancer Survivors, Emotions, Facial Expression, Facial Recognition, Humans, Female, Male, Child, Adolescent, Brain Neoplasms/psychology, Cancer Survivors/psychology, Follow-Up Studies
12.
Biol Psychol ; 187: 108771, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38460756

ABSTRACT

The ability to detect and recognize facial emotions emerges in childhood and is important for understanding social cues, but we know relatively little about how individual differences in temperament may influence early emotional face processing. We used a sample of 419 children (Mage = 10.57 years, SD = 1.75; 48% female; 77% White) to examine the relation between temperamental shyness and early stages of emotional face processing (assessed using the P100 and N170 event-related potentials) during different facial expressions (neutral, anger, fear, and happy). We found that higher temperamental shyness was related to greater P100 activation to faces expressing anger and fear relative to neutral faces. Further, lower temperamental shyness was related to greater N170 activation to faces expressing anger and fear relative to neutral faces. There were no relations between temperamental shyness and neural activation to happy faces relative to neutral faces for P100 or N170, suggesting specificity to faces signaling threat. We discuss findings in the context of understanding the early processing of facial emotional display of threat among shy children.
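The P100 and N170 components analyzed above are typically quantified as peak deflections within fixed post-stimulus windows. A minimal sketch follows; the window bounds are conventional approximations, not the study's exact parameters.

```python
def peak_in_window(times_ms, voltages, start, end, polarity):
    """Return (latency_ms, amplitude) of the most positive ('pos') or most
    negative ('neg') deflection within [start, end] ms. Conventionally,
    P100 is a positive peak near 80-130 ms and N170 a negative peak near
    130-200 ms over occipito-temporal sites."""
    window = [(t, v) for t, v in zip(times_ms, voltages) if start <= t <= end]
    key = (lambda tv: tv[1]) if polarity == "pos" else (lambda tv: -tv[1])
    return max(window, key=key)

# Hypothetical single-trial ERP samples (ms, microvolts)
times = [80, 100, 120, 150, 170, 210]
volts = [1.0, 4.0, 2.0, -2.0, -6.0, 1.0]
p100 = peak_in_window(times, volts, 80, 130, "pos")
n170 = peak_in_window(times, volts, 130, 200, "neg")
```

Component amplitudes extracted this way, averaged per condition, are what gets related to temperamental shyness in the analyses above.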


Subjects
Facial Recognition, Shyness, Child, Humans, Female, Male, Facial Recognition/physiology, Emotions/physiology, Evoked Potentials/physiology, Anger, Facial Expression, Electroencephalography
13.
J Integr Neurosci ; 23(3): 48, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38538212

ABSTRACT

In the context of perceiving individuals within and outside of social groups, there are distinct cognitive processes and mechanisms in the brain. Extensive research in recent years has delved into the neural mechanisms that underlie differences in how we perceive individuals from different social groups. To gain a deeper understanding of these neural mechanisms, we present a comprehensive review from the perspectives of facial recognition and memory, intergroup identification, empathy, and pro-social behavior. Specifically, we focus on studies that utilize functional magnetic resonance imaging (fMRI) and event-related potential (ERP) techniques to explore the relationship between brain regions and behavior. Findings from fMRI studies reveal that the brain regions associated with intergroup differentiation in perception and behavior do not operate independently but instead exhibit dynamic interactions. Similarly, ERP studies indicate that the amplitude of neural responses shows various combinations in relation to perception and behavior.


Subjects
Empathy, Facial Recognition, Humans, Magnetic Resonance Imaging, Brain/physiology, Evoked Potentials/physiology, Brain Mapping, Social Behavior
14.
BMC Psychiatry ; 24(1): 226, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532335

ABSTRACT

BACKGROUND: Patients with schizophrenia (SCZ) exhibit deficits in recognizing facial expressions with unambiguous valence. However, only a limited number of studies have examined how these patients fare in interpreting facial expressions with ambiguous valence (for example, surprise). Thus, we aimed to explore the influence of emotional background information on the recognition of ambiguous facial expressions in SCZ. METHODS: A 3 (emotion: negative, neutral, and positive) × 2 (group: healthy controls and SCZ) experimental design was adopted in the present study. The experimental materials consisted of 36 images of negative emotions, 36 images of neutral emotions, 36 images of positive emotions, and 36 images of surprised facial expressions. In each trial, a briefly presented surprised face was preceded by an affective image. Participants (36 SCZ and 36 healthy controls (HC)) were required to rate the emotional experience induced by the surprised facial expressions on a 9-point rating scale. The data were analyzed using analyses of variance (ANOVAs) and correlation analysis. RESULTS: First, the SCZ group reported a more positive emotional experience under the positive cued condition compared to the negative cued condition, while the HC group reported the strongest positive emotional experience in the positive cued condition, a moderate experience in the neutral cued condition, and the weakest in the negative cued condition. Second, the SCZ (vs. HC) group showed longer reaction times (RTs) for recognizing surprised facial expressions. The severity of schizophrenia symptoms in the SCZ group was negatively correlated with rating scores for emotional experience under the neutral and positive cued conditions. CONCLUSIONS: Recognition of surprised facial expressions was influenced by background information in both SCZ and HC, and by negative symptoms in SCZ. The present study indicates that the role of background information should be fully considered when examining the ability of SCZ patients to recognize ambiguous facial expressions.


Subjects
Facial Recognition, Schizophrenia, Humans, Emotions, Recognition (Psychology), Facial Expression, China
15.
J Anim Sci ; 102, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38477672

ABSTRACT

The accurate identification of individual sheep is a crucial prerequisite for establishing digital sheep farms and precision livestock farming. Currently, deep learning technology provides an efficient and non-contact method for sheep identity recognition. In particular, convolutional neural networks can be used to learn features of sheep faces to determine their corresponding identities. However, existing sheep face recognition models face problems such as large model size and high computational cost, making it difficult to meet the requirements of practical applications. In response to these issues, we introduce a lightweight sheep face recognition model called YOLOv7-Sheep Face Recognition (YOLOv7-SFR). Considering the labor-intensive nature of manually capturing sheep face images, we developed a face-image recording channel to streamline the process and improve efficiency. This study collected facial images of 50 Small-tailed Han sheep through the recording channel. The experimental sheep ranged in age from 1 to 3 yr, with an average weight of 63.1 kg. Data augmentation further enhanced the original images, resulting in a total of 22,000 sheep face images, from which a sheep face dataset was established. To make the model lighter and improve its recognition performance, several improvement strategies were adopted. Specifically, we introduced the shuffle attention module into the backbone and fused the Dyhead module with the model's detection head. By combining multiple attention mechanisms, we improved the model's ability to learn target features. Additionally, the traditional convolutions in the backbone and neck were replaced with depthwise separable convolutions. Finally, leveraging knowledge distillation, we enhanced performance further by employing You Only Look Once version 7 (YOLOv7) as the teacher model and YOLOv7-SFR as the student model. The training results indicate that the proposed approach achieved the best performance on the sheep face dataset, with a mean average precision@0.5 of 96.9%. The model size and average recognition time were 11.3 MB and 3.6 ms, respectively. Compared to YOLOv7-tiny, YOLOv7-SFR showed a 2.1% improvement in mean average precision@0.5, along with a 5.8% reduction in model size and a 42.9% reduction in average recognition time. These results are expected to drive practical applications of sheep face recognition technology.
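The knowledge-distillation step above trains the student (YOLOv7-SFR) against the teacher's (YOLOv7's) softened outputs. Distilling a detector is more involved than this, but the core classification-style soft-target loss can be sketched as a temperature-scaled KL divergence; the logits below are illustrative.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 as is conventional so gradients stay comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

# Hypothetical per-class logits for one detection
loss = distillation_loss(student_logits=[1.2, 0.3], teacher_logits=[2.0, -0.5])
```

The loss is zero when the student matches the teacher exactly and grows as their softened distributions diverge, pushing the small model toward the large model's behavior.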


Accurate identification of individual sheep is a crucial prerequisite for establishing digital sheep farms and precision livestock farming. In this study, we developed a lightweight sheep face recognition model, YOLOv7-SFR. Utilizing a face image recording channel, we efficiently collected facial images from 50 experimental sheep, resulting in a comprehensive sheep face dataset. Training results demonstrated that YOLOv7-SFR surpassed state-of-the-art lightweight sheep face recognition models, achieving a mean average precision@0.5 of 96.9%. Notably, the model size and average recognition time of YOLOv7-SFR were merely 11.3 MB and 3.6 ms, respectively. In summary, YOLOv7-SFR strikes an optimal balance between performance, model size, and recognition speed, offering promising practical applications for sheep face recognition technology. This study employs deep learning for sheep face recognition tasks, ensuring the welfare of sheep in the realm of digital agriculture and automation practices.


Subjects
Facial Recognition, Obstetric Labor, Animals, Sheep, Pregnancy, Female, Agriculture, Farms, Livestock
16.
Math Biosci Eng ; 21(3): 4165-4186, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38549323

ABSTRACT

In recent years, the extensive use of facial recognition technology has raised concerns about data privacy and security in various applications, such as improving security, streamlining attendance systems, and smartphone access. In this study, a blockchain-based decentralized facial recognition system (DFRS) is designed to overcome these challenges. The DFRS takes a trailblazing approach, focusing on striking a critical balance between the benefits of facial recognition and the protection of individuals' privacy rights in an era of increasing monitoring. First, the facial traits are segmented into separate clusters, each maintained by a specialized node that preserves data privacy and security. After that, data obfuscation is performed using generative adversarial networks. To ensure the security and authenticity of the data, the facial data is encoded and stored in the blockchain. The proposed system achieves significant results on the CelebA dataset, which shows the effectiveness of the proposed approach. The proposed model has demonstrated enhanced efficacy over existing methods, attaining 99.80% accuracy on the dataset. The study's results emphasize the system's efficacy, especially in biometrics and privacy-focused applications, demonstrating outstanding precision and efficiency during its implementation. This research provides a complete and novel solution for secure facial recognition and data security for privacy protection.
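Storing encoded facial data "in the blockchain" reduces, at its simplest, to a tamper-evident hash chain. This toy sketch is not the paper's actual architecture; it only shows why modifying any stored record invalidates every subsequent block.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash covers both the payload (e.g., an encoded
    face record) and the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    block = {"payload": payload, "prev": prev_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash from the genesis block forward; any tampering
    with a stored payload or link breaks verification."""
    prev = "0" * 64
    for b in chain:
        body = json.dumps({"payload": b["payload"], "prev": prev}, sort_keys=True)
        if b["prev"] != prev or b["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = b["hash"]
    return True
```

In the paper's setting the payload would be the obfuscated, encoded facial representation rather than plaintext, so integrity is verifiable without exposing the biometric itself.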


Subjects
Blockchain , Deep Learning , Facial Recognition , Humans , Privacy , Phenotype
17.
Acta Psychol (Amst) ; 245: 104237, 2024 May.
Article in English | MEDLINE | ID: mdl-38537601

ABSTRACT

Developmental prosopagnosia (DP) is a lifelong inability to recognize individuals by their faces, present from birth and without any history of brain damage. Assessing face recognition ability and diagnosing DP involve face tests such as the Cambridge Face Memory Test (CFMT) and the Cambridge Face Perception Test, along with self-report measures such as the 20-Item Prosopagnosia Index (PI20). Face recognition accuracy is affected by anxiety; however, previous studies on the relationship between face recognition ability and anxiety have not used the PI20. This study investigated the relationship between self-reported face recognition ability and anxiety tendencies among healthy young individuals, and its implications for DP diagnosis. We administered the PI20, the CFMT, the Visual Perception Test for Agnosia-Famous Face Test (VPTA-FFT), and the State-Trait Anxiety Inventory (STAI) to 116 Japanese young adults (75 females; median age 20.7 years, standard deviation 1.2). We then examined the relationship between the face recognition measures and STAI scores using Pearson correlation coefficients. State anxiety correlated positively with PI20 scores (r = 0.308, p = 0.007), and trait anxiety showed a weak positive correlation with PI20 scores (r = 0.268, p = 0.04). In contrast, neither CFMT nor VPTA-FFT performance correlated with STAI scores. Hierarchical multiple regression further suggested that the association between the self-reported PI20 and the objective measures of face recognition performance (CFMT and VPTA-FFT) is driven by differences in anxiety.
This study is the first to explore the relationship between face recognition abilities and anxiety using the PI20 self-report measure. These findings have implications for future research on the diagnosis of DP and on the relationship between anxiety and face recognition.
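For readers unfamiliar with the statistic, a Pearson correlation of the kind reported above can be computed as follows. This is a minimal sketch with made-up anxiety and PI20 scores, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: higher state anxiety paired with higher PI20
# (self-reported face recognition difficulty), giving r close to +1.
anxiety = [32, 41, 38, 50, 45, 36, 55, 48]
pi20 = [44, 52, 47, 66, 58, 49, 71, 60]
print(round(pearson_r(anxiety, pi20), 3))
```

In practice one would use a statistics package that also returns the p-value (as reported in the abstract), but the coefficient itself is just this normalized covariance.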


Subjects
Facial Recognition , Prosopagnosia , Female , Young Adult , Humans , Adult , Prosopagnosia/diagnosis , Recognition, Psychology , Anxiety/diagnosis , Self Report , Pattern Recognition, Visual
18.
Neuroimage ; 291: 120591, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38552812

ABSTRACT

Functional imaging has helped to establish the human insula as a major processing network for integrating input with the current state of the body, but such studies remain at a correlative level. Studies examining insula damage show lesion-specific performance deficits. Case reports have provided anecdotal evidence for deficits following insula damage, while group lesion studies offer several advantages in providing evidence for the functional representation of the insula. We conducted a systematic literature search to review group studies of patients with insula damage after stroke and identified 23 studies that tested emotional processing performance in these patients. Eight of these studies assessed emotional processing of visual stimuli (most commonly IAPS images), auditory stimuli (e.g., prosody), somatosensory stimuli (emotional touch), and autonomic function (heart rate variability). Fifteen other studies examined social processing, including emotional face recognition, gaming tasks, and tests of empathy. Overall, there was a bias towards testing only patients with right-hemispheric lesions, making it difficult to assess hemispheric specificity. Although many studies included an overlay of lesion maps to characterise their patients, most did not differentiate lesion statistics between insula subunits and/or apply voxel-based associations between lesion location and impairment, probably because small group sizes limit statistical comparisons. We conclude that multicentre analyses of lesion studies with comparable patients and performance tests are needed to definitively test the specific function of parts of the insula in emotional processing and social interaction.


Subjects
Facial Recognition , Stroke , Humans , Magnetic Resonance Imaging/methods , Emotions/physiology , Stroke/complications , Stroke/diagnostic imaging , Empathy , Brain Mapping/methods
19.
Sci Rep ; 14(1): 6687, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509151

ABSTRACT

Congenital Prosopagnosia (CP) is an innate impairment in face perception with heterogeneous characteristics. It is still unclear if and to what degree holistic processing of faces is disrupted in CP. Such disruption would be expected to lead to a focus on local features of the face. In this study, we used binocular rivalry (BR) to implicitly measure face perception in conditions that favour holistic or local processing. The underlying assumption is that if stimulus saliency affects the perceptual dominance of a given stimulus in BR, one can deduce how salient a stimulus is for a given group (here: participants with and without CP) based on the measured perceptual dominance. A further open question is whether the deficit in face processing in CP extends to the processing of the facial display of emotions. In experiment 1, we compared predominance of upright and inverted faces displaying different emotions (fearful, happy, neutral) vs. houses between participants with CP (N = 21) and with normal face perception (N = 21). The results suggest that CP observers process emotions in faces automatically but rely more on local features than controls. The inversion of faces, which is supposed to disturb holistic processing, affected controls in a more pronounced way than participants with CP. In experiment 2, we introduced the Thatcher effect in BR by inverting the eye and mouth regions of the presented faces in the hope of further increasing the effect of face inversion. However, our expectations were not borne out by the results. Critically, both experiments showed that inversion effects were more pronounced in controls than in CP, suggesting that holistic face processing is less relevant in CP. We find BR to be a useful implicit test for assessing visual processing specificities in neurological participants.


Subjects
Facial Recognition , Prosopagnosia , Prosopagnosia/congenital , Humans , Prosopagnosia/psychology , Pattern Recognition, Visual , Visual Perception , Photic Stimulation
20.
Cortex ; 173: 333-338, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460488

ABSTRACT

Developmental prosopagnosia (DP) is characterised by difficulties recognising face identities and is associated with diverse co-occurring object recognition difficulties. The high co-occurrence rate and heterogeneity of associated difficulties in DP is an intrinsic feature of developmental conditions, where co-occurrence of difficulties is the rule, rather than the exception. However, despite its name, cognitive and neural theories of DP rarely consider the developmental context in which these difficulties occur. This leaves a large gap in our understanding of how DP emerges in light of the developmental trajectory of face recognition. Here, we argue that progress in the field requires re-considering the developmental origins of differences in face recognition abilities, rather than studying the end-state alone. In practice, considering development in DP necessitates a re-evaluation of current approaches in recruitment, design, and analyses.


Subjects
Facial Recognition , Prosopagnosia , Humans , Visual Perception , Pattern Recognition, Visual